nsf.84[w84,jmc]		1984 NSF proposal for Basic Research in AI

Proposed Research

	The long-term goal of my research has always been to
find ways of formalizing common sense knowledge and reasoning
so that a computer program can use common sense knowledge and
common sense ability to achieve the goals we give it.

	We distinguish common sense knowledge from common sense ability.
Common sense knowledge includes facts about how events occur in
time, about the effects of actions by the knower and others, about
physical objects and how they are perceived, and about their properties
and their relations to one another.  Examples are the facts that an
egg contains a yolk, a white and a shell, how to recognize an
egg, and the effects of hard boiling or dropping one.
Common sense ability involves the use of common sense knowledge and
the observation of the world to decide what to do to achieve one's
goals.  The "common" in "common sense" refers to the fact that a
large amount of this knowledge and ability is common to all humans.
Not much of it is understood well enough to include it in computer
programs.
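
	For instance, in the situation calculus of (McCarthy and Hayes
1969), one simplified rendering of such a fact about eggs (the
predicate and function names here are chosen only for illustration)
would be

	\forall e\,s.\ \mathrm{egg}(e) \supset \mathrm{broken}(e,\mathrm{result}(\mathrm{drop}(e),s))

i.e. in the situation that results from dropping an egg in any
situation s, the egg is broken.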

	Expressing common sense knowledge in languages of mathematical
logic and using controlled deduction as a way of deciding what to
do was first proposed in (McCarthy 1960).  This proved very difficult
and progress was slow.  Many people gave up on using logic in AI
and proposed other formalisms, but in the main they proved to be
equivalent to subsets of logical languages, and much attention in
AI has returned to logic.

	A major advance in the use of logical languages for AI was
the notion of formalized non-monotonic reasoning, which arose in
the mid-1970s.  My proposal was called circumscription
(McCarthy 1980).
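
	In the notation of (McCarthy 1980), the circumscription of a
predicate P in a sentence A(P) is the schema

	A(\Phi) \wedge \forall x.(\Phi(x) \supset P(x)) \supset \forall x.(P(x) \supset \Phi(x))

which expresses the conjecture that the only objects satisfying P are
those that the facts A(P) require to satisfy it.  A standard
illustration, written with an "abnormality" predicate ab (the
particular names are for illustration only), is

	\forall x.(\mathrm{bird}(x) \wedge \neg\mathrm{ab}(x) \supset \mathrm{flies}(x)).

Circumscribing ab lets us conclude that a given bird flies unless
something known about it, e.g. that it is a penguin, makes it
abnormal; learning that it is a penguin withdraws the conclusion,
which is what makes the reasoning non-monotonic.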

	The idea of non-monotonic reasoning and the circumscription
approach to formalizing it seem to have wider applicability than
was previously envisioned.  We plan to continue our previous
investigations of formalizing common sense knowledge using circumscription
as a tool.  However, there seem to be many new applications, not
only in AI but also in databases, programming languages and even
analytic philosophy.  Our chief objective in the next three
years will be to explore these new possibilities.  Here are some of them.

1. Neither the languages of mathematical logic nor present programming
languages have the power of natural language, although both provide
necessary supplements to natural language in their respective domains.  In
order to have fully intelligent computer programs, we need to understand
what is powerful about natural language and how to give this power
to programs.  Naturally opinions differ about where this power comes
from.  Some suppose it comes simply from the irregularity of natural
language, but irregularity per se does not give power.

Some suggestions have been made to use natural language as an
internal language for AI, but so far nothing has been done that
isn't equivalent to a subset of first order logic.  Moreover, the
natural language front ends that have been developed just manage
to express in natural language what we already know how to express
in some kind of computerese.  This has the problem backwards.
Rather, natural language must teach us to express in a computer
things that we have not previously been able to represent at all.

Our work on non-monotonic reasoning has provided a clue to a part
of the greater expressive power of natural language.

Present mathematical logic is based on the idea of an interpretation
of a language.

2. Non-monotonic programming languages.  With few and unimportant
exceptions, present programming languages and their compilers are
monotonic in the following sense.  Each construction in the source
language, except declarations, gives rise to code in the target
language.  However, human descriptions of procedures often do not
have this character.
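
	Schematically, and purely as an invented illustration of the
flavor intended (no existing language has this construct), a procedure
might be described as a default together with exceptions, where an
applicable exception cancels the code the default would otherwise
contribute rather than adding to it.  The following Common Lisp
sketch simulates such a description:

	;; Illustrative only: DEFPROCEDURE describes a procedure as a
	;; default action plus exception clauses.  An applicable
	;; exception cancels the default rather than adding code to it,
	;; the "non-monotonic" character of human procedure descriptions.
	(defmacro defprocedure (name args default &rest exceptions)
	  `(defun ,name ,args
	     (cond ,@exceptions
	           (t ,default))))

	(defun overseas-p (destination)
	  (member destination '(london paris tokyo)))

	(defprocedure travel (destination)
	  (list 'drive-to destination)        ; default behavior
	  ((overseas-p destination)           ; exception cancels the default
	   (list 'fly-to destination)))

Here (travel 'london) yields (fly-to london); the default clause
contributes nothing to that case.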

	This is an accomplishment-based renewal proposal.

	Four published papers, one report, and one joint paper accepted
for publication form the basis for this renewal.  Other
recent work not yet submitted for publication will also be mentioned.

The papers are:

%3McCarthy, John (1979)%1:
"Ascribing Mental Qualities to Machines" in %2Philosophical Perspectives 
in Artificial Intelligence%1, Ringle, Martin (ed.), Harvester Press, July 1979.
.<<aim 326, MENTAL[F76,JMC]>>

Computer programs vary much more than people in the extent and nature
of the intentional properties that may be usefully ascribed to them.
This paper outlines criteria for ascribing specific mental qualities,
e.g. specific beliefs and goals, to computer programs and machines in
general.  A popular version appeared in %2Psychology Today%1, December
1983.  John Perry expressed interest in including one or the other
in his forthcoming %2Oxford Readings in Philosophy%1 volume.

%3McCarthy, John (1979)%1: 
"First Order Theories of Individual Concepts and Propositions", 
in Michie, Donald (ed.) %2Machine Intelligence 9%1, (University of
Edinburgh Press, Edinburgh).
.<<aim 325,concep[e76,jmc]>>

This paper presents a variety of formalisms for expressing facts
about knowledge and belief.  What was new was sticking to first
order logic but keeping the objects of belief, etc. as abstract
objects rather than making them sentences.  With the aid of the
notion of standard concepts, certain puzzling sentences discussed
in the philosophical literature can be neatly expressed in a form
that permits derivation of their consequences.  However, the
motivation for the work is AI, not philosophy.
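
	For example, in the spirit of the paper's notation (details
simplified here), the concept Telephone(Mike) is distinguished from
the number it denotes:

	\mathrm{denot}(\mathit{Telephone}(\mathit{Mike})) = \mathrm{telephone}(mike)

so that knows(pat, Telephone(Mike)) can be true while
knows(pat, Telephone(Mary)) is false even when
telephone(mike) = telephone(mary), since the concepts differ
though their denotations coincide.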

%3McCarthy, John (1980)%1: 
"Circumscription - A Form of Non-Monotonic Reasoning", %2Artificial
Intelligence%1, Volume 13, Numbers 1,2, April.
.<<aim 334, circum.new[s79,jmc]>>

This is the most important of the six papers and reports and the
one that has received the most attention from other scientists.
The need for non-monotonic reasoning in AI and in database theory
became apparent in the mid-1970s, and several systems were proposed.
The issue of %2Artificial Intelligence%1 in which this paper appeared
contains two other proposals.  I believe that circumscription is
the most successful of them.  Further papers on circumscription were
written by Raymond Reiter of the University of British Columbia,
Jack Minker of the University of Maryland and Vladimir Lifschitz of
the University of Texas at El Paso.  I have a new paper on circumscription,
but there is one problem I would like to solve before publishing it.

%3McCarthy, John (1982)%1: "Common Business Communication Language", in
%2Textverarbeitung und Bürosysteme%1, Albert Endres and Jürgen Reetz, eds.
R. Oldenbourg Verlag, Munich and Vienna 1982.

Since much office work consists in communicating with other organizations,
increasing office productivity requires that computers belonging to different
organizations communicate with one another about business matters.
The paradigm case involves a cost analysis program getting bids for
components from the computers belonging to potential suppliers.  The
idea started as a simple proposal for standardization, but it turned
up important unsolved problems in the semantics and pragmatics of
natural language.  Indeed it led to the conclusion that putting
natural language front ends on existing programs is the wrong problem.
The point isn't to express in natural language what we already express
in some kind of computerese, but to express formally concepts, facts
and requests that have heretofore only been expressible in natural
language.
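
	As a hypothetical illustration (the paper deliberately leaves
the concrete syntax open, and these field names are invented), the
cost analysis program of the paradigm case might send a potential
supplier a message such as

	(request-quote (part (catalog-number "X7305"))
	               (quantity 100)
	               (deliver-by "1984-09-01"))

and the supplier's computer would answer with a price and a delivery
commitment in the same vocabulary.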

%3Gabriel, Richard P. and John McCarthy (1984)%1: "Queue-based
Multi-processing Lisp", accepted for publication in the proceedings
of the 1984 Lisp Conference to be held in Austin in August.

Getting greater performance from computers requires parallelism,
and LISP has traditionally been regarded as a somewhat recalcitrant
language from this point of view.  The paper contains
a few constructs to be added to LISP to make efficient use of
parallel processors.  The original versions of the ideas are mine,
but their subsequent development and all the implementation work
are my co-author's.
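
	For example, the paper's QLET construct (shown here
schematically; treat the details as illustrative) evaluates its
bindings in separate processes drawn from a queue when its predicate
is non-nil, and the body waits for the results:

	(qlet t ((left  (fib (- n 1)))
	         (right (fib (- n 2))))
	  (+ left right))

When the predicate is nil, the form behaves as an ordinary LET, so
the same program can be run serially or in parallel.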

%3McCarthy, John (1982)%1: %2Coloring Maps and the Kowalski Doctrine%1,
Report No. STAN-CS-82-903, Computer Science Department, Stanford University,
Stanford, CA 94305.

Robert Kowalski, who, along with Alain Colmerauer, originated logic
programming, expressed the doctrine "Algorithm equals logic plus
control".  Several authors (including Keith Clark, Luis Pereira and
Hervé Gallaire),
most prominently in connection with
logic programming, have proposed formalisms in which logic and
control can be expressed separately.  The advantage is this.  The
logic of an algorithm can often be expressed in a simple way that
is easy to understand and in terms of which the algorithm can
be proved to give the desired results.  However, efficient execution
often requires complicated control.  It seemed to me that the
control schemes that have been proposed are very limited, and
that the problem of coloring maps presents some interesting
problems of separating logic and control.  Both the postponement
control and the Kempe control discussed in the article have more
general applications.  After the referenced report was written,
further work done during a visit to Kowalski at Imperial College
resulted in a notion of "introspective programming" in which
a Prolog program can look at its own goal and search trees.
A small "introspective Prolog interpreter" was implemented by
Peter Szeredi of the Hungarian Academy of Sciences, and publication
of the paper awaits incorporation of the idea of introspection
and reference to Szeredi's (unpublished) work.
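
	To give the flavor of postponement control (this Common Lisp
sketch is my own illustration, not the report's code): a region with
fewer neighbors than there are colors can be set aside and colored
after the rest of the map, since some color must remain free for it.

	;; NEIGHBORS is an alist mapping each region to its neighbor list.
	;; Returns an alist mapping regions to colors.
	(defun color-map (regions neighbors colors)
	  (if (null regions)
	      '()
	      ;; Postponement: set aside a region with fewer neighbors
	      ;; than colors and color everything else first.
	      (let ((r (find-if (lambda (r)
	                          (< (length (cdr (assoc r neighbors)))
	                             (length colors)))
	                        regions)))
	        (if (null r)
	            (error "Postponement alone fails; Kempe control is needed.")
	            (let* ((rest (color-map (remove r regions) neighbors colors))
	                   (free (find-if (lambda (c)
	                                    (notany (lambda (n)
	                                              (eql c (cdr (assoc n rest))))
	                                            (cdr (assoc r neighbors))))
	                                  colors)))
	              (cons (cons r free) rest))))))

When no region qualifies, the Kempe transformations discussed in the
report take over; they are beyond this sketch.
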
Artificial intelligence requires formal languages that have the
flexibility of natural language, but it is difficult to make
precise what this flexibility is.  One aspect comes from the fact
that the concepts used in natural language do not have to have
general precise definitions or even to be fully understood in order to be usable.
Indeed a large part of philosophy is involved in trying to make
precise concepts from natural language.  This philosophical work
typically involves taking a concept that people think they understand
and showing, by inventing hypothetical cases, that we are unsure
about what it means in general.  However, even when uncertainty
about a concept is demonstrated and remains unresolved, people
still use the concept successfully in its previous domain.
If an artificial intelligence is to use these concepts, we
must build it with a capability like that of a human to use
concepts about whose meaning problems remain.  Otherwise, we
will have to solve all the philosophical problems of meaning
before we can begin on artificial intelligence.

Our goal, therefore, is to formalize a certain class of imprecise
concepts.  We plan to use ordinary logic coupled with non-monotonic
inference, most likely circumscription.  The idea is that a
particular usage of a concept is presumed meaningful and unambiguous
unless there is evidence to the contrary.

An example will help make the problem and our goal clear.  Suppose
we want to build a legal expert system to advise a district attorney
about what indictments will be supported by the facts of a particular
case.  Suppose that one of the relevant laws passed by a legislature
makes it a crime to attempt to bribe a public official.  Thus our
program should sometimes recommend an indictment for violating
this law.

	In an environment without expert systems, such a law may
be on the books for many years and many cases may be tried before
the following possibilities for ambiguity are noticed.

De dicto defense:

	A lawyer offers the following defense.  "You have offered
evidence that my client offered the Commissioner of Motor Vehicles
$5,000 to avoid losing his license for drunk driving, but you
haven't proved that my client knew the man was the Commissioner.
My client may have thought he was just a lawyer.  Surely attempting
to bribe a public official requires that the briber know the
person is a public official."

De re defense:

	Another lawyer offers a different defense.  "You have offered
evidence that my client offered $5,000 to this man under the impression
that he was the Commissioner, but his appointment hadn't started yet
in the new administration, so he wasn't in fact a public official.
Surely to attempt to bribe a public official, the person to whom
the bribe is offered must actually be a public official".

Indefiniteness defense:

	"It is true my client advertised in the %2Criminal Gazette%1
that he would pay $5,000 to any public official who would fix
his drunk driving conviction, but you haven't exhibited a specific
public official who was likely to read the advertisement.  Surely
an attempt to bribe a public official requires a public official
one is attempting to bribe".

	Our present point is not the resolution of such ambiguities,
in which both lawyers and philosophers have been interested.  Instead we
are interested in the fact that indictments have been sought
and cases tried in which no participant was aware of the possibility
of ambiguity in the law.  Indeed these particular ambiguities don't
arise in a case where a specific public official whom the defendant
knew to be a public official is involved.  In fact, even after an
ambiguity has been discovered, and while a case involving it is still
in litigation, cases not involving the ambiguity are resolved
as before.

	Our proposed resolution of the problem involves a non-monotonic
rule that a concept is to be regarded as unambiguous in all cases
where an ambiguity cannot be exhibited.
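
	As a first approximation (the predicate names here are ours,
chosen for illustration), the rule might be written

	\forall u.(\mathrm{usage}(u) \wedge \neg\mathrm{ab}(u) \supset \mathrm{unambiguous}(u))

with ab circumscribed, so that a usage counts as ambiguous only when
the facts at hand force ab(u).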

	A quite different kind of potential ambiguity arises in the
case of the word "Stanford".  We aren't for the moment interested
in its interpretation as designating a person, but suppose the
trustees of Stanford University decided to move the University
to Texas.  Suddenly many references to Stanford would become
ambiguous in ways that no one had previously contemplated.

	Our goal is to make a computer system that could use such
a concept without ever noticing a specific ambiguity.  If an
ambiguity arose it might be "puzzled", but it wouldn't lose its
ability to use the concept in unambiguous cases.

	Our actual goal with regard to using non-monotonic
reasoning to make more flexible logical languages is not quite
as precise as the above would suggest.  There are many interesting
phenomena here, and we propose to explore them.

Proposed Research

1. Formalization of common sense knowledge and reasoning

2. Non-monotonicity

3. Programming without looking

4. CBCL

5. Mathematical Theory of Computation and programming languages

	As the submitted papers show, I have been active in a variety
of fields in the last five years.  Nevertheless, my main activity
has been in the formalization in logic of common sense knowledge
and common sense reasoning, and I propose to continue this.

	I see the following problems as presenting opportunities
for progress in the formalization of common sense and its application
to artificial intelligence.